Multi-level complexities in technological development: Competing strategies for drug discovery
Matthias Adam
Abstract
Drug development regularly has to deal with complex circumstances on two levels: the local level of pharmacological intervention on specific target proteins, and the systems level of the effects of pharmacological intervention on the organism. Different development strategies in the recent history of early drug development can be understood as competing attempts at coming to grips with these multi-level complexities. Both rational drug design and high-throughput screening concentrate on the local level, while traditional empirical search strategies as well as recent systems biology approaches focus on the systems level. The analysis of these strategies reveals serious obstacles to integrating the study of interventive and systems complexity in a systematic, methodical way. Due to some fairly general properties of biological networks and the available options for pharmaceutical intervention, drug development is captured in an obstinate methodological dilemma. It is argued that at least in typical cases, drug development therefore remains dependent on coincidence, serendipity or plain luck to bridge the gap between (empirical and/or rational) development methodology and actual therapeutic success.

1. An obstinate dilemma in early drug development

For the successful development of technology, it is often crucial to pay close attention to a number of different complex circumstances. Development can be particularly demanding if the complexities lie on different levels of a system. For instance, the design of a technology can require, first, that the technological artifact intervenes in a highly sophisticated way on a specific part of a system. Second, this intervention might have to produce effects in the system as a whole that are mediated by complex systems mechanisms. In such a case, complexities on two levels have to be dealt with: complexity of (local) intervention and complexity of systems effects (or systems complexity).

For drug development, both types of complexity play a major role. Pharmaceuticals typically act on target proteins, such as enzymes, receptors or ion channels. The interaction between (potential) drugs and these molecular targets is often highly complex and subject to comprehensive theoretical modeling, chemical design and empirical testing in drug development. This problem is local in nature, being associated with specific proteins and their manipulation by drug molecules. At the same time, the intended and unintended effects of pharmacological interventions lie on the level of entire systems such as cells, organs or the organism as a whole. These global effects are often mediated by complex networks of mechanisms and can be extremely difficult to predict from the local interventions. Yet, successful and efficient drug development demands that the two levels of complexity are accounted for in the development process from early on.

In this paper, I will trace the relevance of these two levels through the recent history of early drug development. A number of different strategies have been developed that can be understood as competing attempts at coming to grips with either or both levels of complexity. Traditionally, empirical search strategies dominated much of drug discovery, as large numbers of randomly chosen substances or chemical modifications of existing drugs were tested in animal models.
Novel drugs were typically identified empirically on the basis of their observable systems effects, independently of a scientific understanding of the underlying molecular interventions. Yet, particularly since the 1970s, enormous progress in biochemistry and molecular biology initiated a fundamental reorientation of pharmaceutical development. From the 1980s onwards, a more science-based paradigm of drug discovery was widely adopted under the heading of rational drug design. In particular, the chemical interactions of drug candidates with target proteins became subject to close scrutiny, aiming at a targeted design of novel drugs with well-defined molecular action. Despite some important successes, the overall efficiency of the rational drug design paradigm for drug discovery remained contested. As a consequence, empirical search strategies returned to the focus of attention of much of pharmaceutical research. Since the early 1990s, high-throughput screening was developed as a highly efficient method to test ever larger numbers of substances empirically. Yet, both rational drug design and high-throughput screening are to a large degree concerned with local problems, i.e. with the activity of drug candidates ('lead substances') on their targets and with the optimization of their local activity profile. Since the millennium, growing concern about this overall orientation of drug discovery can be observed. More holistic approaches to drug discovery based on the emerging systems biology have been proposed instead. They aim at focusing more closely on the relevant systems properties from early on.

The analysis of these competing strategies reveals a serious dilemma for early drug development. Rational drug design and high-throughput screening share a strong focus on singular molecular targets. With these methods, early drug design thus concentrates almost exclusively on interventive complexity. In contrast to this, the systems biology approaches primarily pay attention to whole systems and thus to systems complexity. The systems effects of pharmaceutical interventions had also been the major point of reference for traditional drug discovery. A closer look at these different strategies shows that there are serious obstacles to integrating the study of interventive and systems complexity in a systematic, methodical way. For instance, the methods of rational drug design and high-throughput screening often presuppose that the interventive target is isolated from its natural context to a high degree. It therefore seems inevitable that these approaches do not give equal prominence to the study of systems effects. In contrast to this, the systems biology strategies focus on systems complexity. Yet, it is unclear how these approaches could retain a sufficient grip on the details of drug-target interaction and thus on interventive complexity (see sect. 4). I will argue that due to some fairly general properties of biological networks and the available options for pharmaceutical intervention, drug development is captured in a methodological dilemma that is considerably obstinate. In general, since systems effects remain largely unpredictable from local interventions, the modeling and testing of drug-target interactions cannot 'reach up' to systems effects, while the investigation of systems effects cannot be tracked down to a molecular level on which it could direct the chemical design of drugs.
This dilemma sets limits on the degree to which drug discovery and development can be turned into a systematic enterprise at all, for instance by being guided by a scientific understanding of underlying mechanisms or by exploring the options for pharmaceutical intervention in a methodical way. There are thus reasons to assume that at least in typical cases, drug development remains dependent on coincidence, serendipity or plain luck to bridge the gap between (empirical and/or rational) development methodology and actual therapeutic success.

2. Rational drug design and its limits

Traditional drug discovery up to the 1970s drew largely on empirical search and serendipity. Serendipitous findings are usually understood as useful hints or results gained in investigations that were originally directed at something rather different. Hugo Kubinyi compiled a list of more than 50 examples where important biological activities of substances were found serendipitously (Kubinyi 1999). In many cases, the substances were intended for quite different uses, and their pharmacological potential surfaced unexpectedly in the course of the studies or was found by accident. Among the results from such findings are many ground-breaking drugs, such as the first antipsychotic chlorpromazine, the anticoagulant agent warfarin, or the first platinum-based anticancer drug cisplatin.

Beyond serendipity, methods of empirical search played a major role in most cases of traditional drug development. In a comprehensive study of the development of innovative drugs between the 1940s and the 1970s, Robert A. Maxwell and Shoreh B. Eckhardt found that screening contributed to the development of 25 drugs out of their 32 cases (Maxwell and Eckhardt 1990, 394). Within the screening contributions, the authors distinguish untargeted from targeted screening. For untargeted screening, a random selection of substances is tested for pharmacological activity in a biological test system (such as a cell- or organ-based assay or an animal model). Such substances can come from the libraries that pharmaceutical companies have assembled throughout their history, they might be gained from natural extracts, or they can be the result of unsystematic chemical synthesis (Adam 2008a). For random screening, no prior clues are presupposed that test substances might be useful. Instead, random screening aims at the chance identification of such substances in the first place. In comparison to this, targeted screening already starts from a prototype substance with some known pharmacological features. By way of chemical variation and testing, drug researchers aim to find derived substances with improved or modified characteristics that suit their purposes. For instance, one might try to improve the selectivity of the substance or to increase its potency. Often in these cases, the screening is iterated: from the variants of the original prototype, the most promising substances can be selected as starting points for further variation (Adam 2008a).

Empirical search strategies such as random or targeted screening and iterative trial and error are widespread in technology development (Pitt 2001; Thomke et al. 1998; Vincenti 1990, 159-166). Usually, their role is compensatory: they are used because there is not sufficient information on the mechanisms underlying the technological intervention that could guide the design process in a targeted way.
In traditional drug development, the situation was similar. Often, the protein targets were not known at all or had not been singled out when the search began, or if they were known, there was no specific information on their chemical and spatial structure that could direct the synthesis of potential drugs. In addition, there was often insufficient knowledge of the network of mechanisms that led to the disease. Empirical search by screening substances for their systems effects could be conducted even if the target of pharmacological intervention and the pathologically relevant system were only poorly understood (Maxwell and Eckhardt 1990, 409). The typical approach both to interventive and systems complexity in traditional drug development was thus to find new drugs by empirical trial and error rather than through scientific understanding.

The prospects of a more rational approach to drug development were discussed in the drug discovery community at least since the late 1960s. Such discussions were inspired by some (at their time) exceptional cases in which a more targeted, knowledge-driven development process was claimed to have been realized (Hitchings 1969, Belleau 1970, Adam 2005). However, a realistic perspective for rational drug design as a standard approach emerged only with important scientific advances in biochemistry, molecular biology and gene technology during the 1970s. In the course of the so-called 'biotechnological revolution', the background knowledge and methods available to drug discovery developed considerably. More and more potential molecular targets became known; through gene-technological cloning and expression of human proteins in bacteria or yeasts, these targets became available for research and empirical testing; advances in X-ray crystallography paved the way for the elucidation of the three-dimensional molecular structure of such proteins. As a consequence, systematic epistemic access to protein targets and their chemical interactions with potential drugs came within reach of drug development. It therefore became conceivable to design drugs in their chemical structure on the basis of detailed knowledge of a target protein, its structure and its biological function.

A prominent example of rational design from the 1970s is the development of the antihypertensive drug captopril. In one of the first such cases, information gained through crystallographic studies was successfully used to model in spatial and chemical detail the active site of the target and its interactions with ligands. The chemical design of the drug was directly guided by this information.

Altogether, rational drug design promises in particular to tackle the complexity of intervention in a targeted way. Based on detailed knowledge of molecular structures and an understanding of drug-target interactions, the aim is to identify the optimal chemical design of drugs. The modeling of the spatial relations and chemical interactions between drug and target therefore often stands at the center of the studies. Methodologically, rational drug design aims to integrate scientific research into molecular mechanisms and chemical structures with the development of useful therapeutics. It is essential for the approach that inferential relations can be established between molecular knowledge and the chemical design of drugs. Yet, rational design is not a purely deductive approach.
The identification of promising targets and the elucidation of their molecular properties often go hand in hand with empirical tests of drug candidates. Guidance from existing scientific knowledge is then supplemented by specific information that is gained through the development process. Repeatedly, targeted screening remains important for these purposes. If the inferential relations are sufficiently close, the fundamental knowledge can both contribute to the development process and be supplemented or confirmed through empirical testing (Adam 2005).

The methods of structure-based rational development were broadly adopted in the pharmaceutical industry. In the early years, structural information for the most part still relied on homologs (Congreve et al. 2005). For instance, the developers of captopril could not make use of direct structural information on their protein target ACE (angiotensin converting enzyme), but used the available information on a related bovine enzyme instead (Cushman and Ondetti 1991; for a detailed reconstruction of the case and the role of the interaction model, see Adam 2005). Already in the mid-1980s, however, the development of the first next-generation antihypertensive drug, losartan, included direct structural information on its target angiotensin II (Adam 2005), while in 1989, the complete structure of HIV protease became accessible and was subsequently used for the development of HIV protease inhibitors (Congreve et al. 2005). The number of protein structures published in the Protein Data Bank has grown exponentially since the 1970s, from a total of 70 structures in 1970 via 500 in 1980 and 13,600 in 2000 to 48,000 in 2007 (PDB 2008).

Rational drug design raised high expectations among drug researchers in the 1980s. It was hoped by many that it could substantially reduce the dependence on chance or serendipitous findings, paving the way to a much more orderly and predictable development process (Drews 1999, 121-122). In addition, many pharma managers believed that cutting-edge scientific research had become indispensable for developing innovative new drugs. As a consequence, drug development in the pharmaceutical industry became much more science-oriented than it was before. As one drug researcher described the situation in the early 1980s, "it was fashionable to invest in basic research, so we did" (Cockburn et al. 1999). Such expectations and the corresponding changes in research management seem quite natural in a situation in which a science-based, rational development process emerges as a serious alternative to existing, chance-dependent methods of empirical search. In general, however, it is enormously difficult in drug development to assess the success of such strategic decisions in a timely manner, since it regularly takes more than 10 years until decisions on research approaches and development technology have effects on the clinical introduction of new drugs (Schmid and Smith 2004). In Kubinyi's view, "the drug discovery scene is covered with a mist of myths, hype and false conclusions ... Whenever a new concept or technology emerges, people get excited, jump on it and expect that new drugs will result more or less automatically" (Kubinyi 2003, 665). In fact, when one takes stock of the results that rational drug design delivered in the two decades after its broad adoption in the 1980s, the evaluation of the approach turns out to be rather mixed.
Rational methods have been widely adopted in the pharmaceutical industry for two main purposes: the optimization of existing lead substances (i.e. the chemical modification of existing prototypes with the aim of improving, e.g., potency, selectivity or pharmacokinetic properties), and the discovery of new lead substances. There is a broad consensus that rational methods of modeling drug-target interactions have contributed on a broad scale to lead optimization, and that they have a huge impact on this step of drug development (Hardy and Malikayil 2003; Congreve et al. 2005). In contrast to this, the outcome with respect to lead discovery is rather modest. On the one hand, Hardy and Malikayil have identified more than 40 substances in clinical development which have been discovered with the help of rational, structure-guided methods (Hardy and Malikayil 2003; similarly Kuhn et al. 2002). On the other hand, only a relatively small number of these substances have so far been brought to the clinic. Tom L. Blundell and co-workers have identified only ten drugs that have emerged from structure-guided design (Congreve et al. 2005). Since there are three HIV protease inhibitors and two neuraminidase inhibitors among them, these drugs altogether address only seven different protein targets. (Not included in the list are, for instance, tegaserod (Buchheit et al. 1995), as well as many 'me-too' drugs, i.e. more or less close followers of existing drugs, which are regularly designed on the basis of detailed molecular knowledge.) Even if Blundell might have used rather strict criteria for inclusion in the list, it indicates that there remain serious difficulties for the rational design of novel drugs.

Some of the difficulties for rational drug design are exemplified by an important method of rational lead identification, so-called 'virtual screening'. In virtual screening, the binding and affinity of potential drugs to target proteins is assessed computationally. This is a rational method for lead discovery since the binding properties are predicted on the basis of structural information on the target protein (Klebe 2006). The very approach of screening for leads computationally already shows that the chemical design of new lead substances often cannot be derived directly from the structure of the target protein alone. Instead, given compounds are checked 'in silico', i.e. by computer simulation, to determine whether they would bind to the target. The method of virtual screening itself faces two major problems, the docking and the scoring problem. The docking problem concerns the task of correctly predicting the binding orientation of the substance in the active site of the target. Conformational flexibility of both the test compound and the target protein often complicates this task considerably. Still, the best docking programs correctly dock about 70-80% of compounds (Congreve et al. 2005, 899; Klebe 2006, 582-583). The scoring problem concerns the task of comparing the substances that dock to a given target with respect to their affinity. The aim of scoring is to identify the substances that bind with the highest affinities to the target and are therefore most promising as lead substances. To deal with this problem, one has to calculate the strengths of the chemical bonds between compounds and target. In principle, these could be calculated from first principles based on quantum mechanics or approximate force fields. Yet, according to Klebe, these calculations are computationally so demanding that "screening large samples of docked solutions to estimate binding affinities is still far beyond tractability" (Klebe 2006, 588). Therefore, empirical scoring functions are typically used which are derived from empirically determined affinities. These functions often have only limited generality, making the accuracy of the scoring dependent on the relevance of the empirical reference set.
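To make the logic of such empirical scoring functions concrete, the following minimal sketch (in Python) fits a linear scoring function to a small reference set of measured affinities and uses it to rank new candidates. All descriptors, weights and affinity values are invented for illustration; actual scoring functions rest on far more elaborate interaction terms and much larger reference sets.

```python
# Minimal, illustrative sketch of an empirical scoring function of the kind
# described above (not any particular docking program). All numbers are invented.
import numpy as np

# Each docked pose is reduced to a few interaction descriptors, e.g.
# [number of hydrogen bonds, lipophilic contact area, rotatable bonds].
training_descriptors = np.array([
    [3, 120.0, 4],
    [1,  80.0, 2],
    [5, 200.0, 7],
    [2, 150.0, 3],
    [4,  90.0, 6],
])                                                # hypothetical reference set
training_affinities = np.array([-8.2, -5.1, -9.6, -7.0, -7.8])  # measured, e.g. in kcal/mol

# Fit weights by least squares: affinity ~ w0 + w . descriptors.
X = np.hstack([np.ones((len(training_descriptors), 1)), training_descriptors])
weights, *_ = np.linalg.lstsq(X, training_affinities, rcond=None)

def score(descriptors):
    """Predict the binding affinity of a new docked pose from its descriptors."""
    return float(weights[0] + np.dot(weights[1:], descriptors))

# Rank candidate compounds by predicted affinity (most negative = best).
candidates = {"compound_A": [4, 170.0, 5], "compound_B": [1, 60.0, 1]}
ranked = sorted(candidates, key=lambda name: score(candidates[name]))
print([(name, round(score(candidates[name]), 2)) for name in ranked])
```

The sketch also makes the limitation visible: the fitted weights carry no guarantee for compounds whose interaction patterns differ from those of the reference set, which is precisely the dependence on the relevance of the empirical reference set noted above.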
In addition, many fundamental phenomena of compound-target binding are not yet sufficiently understood, such as the role of water molecules or changes in protonation states. Such phenomena are highly relevant for binding and would therefore have to be included in a comprehensive model of drug-target interaction. Yet, they require more fundamental research before they can be taken into account (Klebe 2006, 589; Kubinyi 2003, 667). This shows how difficult it is in drug development to derive useful hints about potential drugs even from detailed knowledge of the interventive target: a direct derivation of the chemical structure of drugs is often out of reach; a simulation of the affinity of given substances has to be based on empirical generalizations of limited range; important mechanisms of drug-target interaction are still little understood. Altogether, the promise of identifying optimal drug molecules on the basis of detailed molecular knowledge of protein targets turns out to be fairly difficult to fulfill (cp. Drews 1999, 122). More often than not, the inferential relations between existing fundamental molecular knowledge and drug design are insufficiently tight to allow for far-reaching rational guidance, at least of lead identification. There thus remains a considerable gap in pharmacology between fundamental knowledge and technology development (Adam 2008). Yet, so far this only shows that from a basic scientific point of view, pharmaceutical interventions into the organism are complex indeed.

3. High-throughput screening as alternative and complement to rational design

While traditional screening approaches tended to be supplanted by more knowledge-based strategies with the trend towards rational drug design in the 1980s, random empirical search returned forcefully to the focus of attention of drug development early in the 1990s. This apparent relapse in research methodology was largely driven by dramatic improvements in test technology: test efficiency was greatly increased and the costs of testing accordingly reduced. From about 1991, the approach was called 'high-throughput screening' (Burch and Kyle 1991). For a considerable time, high-throughput screening was considered mainly as an alternative (and competitor) to rational design. Yet, since about 2000, the two approaches have more often been taken to be complementary (Good et al. 2000; Ratti and Trist 2001), and they can actually be combined efficiently to deal with interventive complexity.

Random screening in traditional drug discovery was often based on organ or animal models. The use of these models was increasingly criticized not only due to ethical concerns, but also because of low success rates and high costs (cp. Chabner and Robert 2005; Böhm et al. 1995, 138-139 and 434-435). The return of random screening was enabled by the miniaturization of the experiments and their comprehensive automation (Burch and Kyle 1991).
State-of-the-art high-throughput screening (as of 2005) tests compounds against the isolated target protein wherever possible. Several hundred tests are performed in parallel in small wells on one plate. The protein target, the test substances, and any additional assay substances are added automatically, and the results are read out directly from the plate. The assay technology is optimized so as to allow the whole test process to be performed without intermediate steps of separation and washing. Typically, test substances are retrieved from the library and dissolved automatically. In addition to existing substance libraries, combinatorial chemistry provides large numbers of new substances by synthesizing them mechanically from a set of chemical building blocks. According to Schering researchers Oliver von Ahsen and Ulf Bömer, a throughput of up to 100,000 substances per day can be achieved once the assay for the screening campaign has been developed and validated. Regularly, libraries of up to one million substances are then screened (von Ahsen and Bömer 2005).
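The data-analysis step of such a campaign can be illustrated with a minimal sketch: raw plate signals are normalized against control values and compounds above an activity threshold are flagged as 'hits'. The signal values, controls and the 50% threshold below are invented for illustration and do not reproduce von Ahsen and Bömer's actual protocol.

```python
# Illustrative hit selection for one simulated 384-well plate of an enzyme
# inhibition assay. All values are invented; real campaigns add further
# quality controls (e.g. plate statistics across control wells).
import numpy as np

rng = np.random.default_rng(0)

neutral_control = 1000.0   # mean signal of the uninhibited enzyme reaction
full_inhibition = 100.0    # mean signal with a reference inhibitor

# Simulated raw signals: mostly inactive compounds plus three active ones.
signals = rng.normal(neutral_control, 40.0, size=384)
signals[[10, 57, 200]] = [220.0, 480.0, 150.0]

# Percent inhibition of each well relative to the controls.
inhibition = 100.0 * (neutral_control - signals) / (neutral_control - full_inhibition)

hit_threshold = 50.0       # flag compounds that inhibit the enzyme by more than 50%
hits = np.flatnonzero(inhibition > hit_threshold)
print(f"{hits.size} hits on this plate, at wells {hits.tolist()}")
```

Run over thousands of plates, such a routine reduces a campaign of a million substances to a short list of hits that can then be analyzed further.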
From an epistemological perspective, the high degree of isolation of the interventive target from its natural environment is particularly noteworthy. According to von Ahsen and Bömer, biochemical assays which make use only of the isolated target protein form the "gold standard" in high-throughput screening, and are also preferred to cell-based assays (von Ahsen and Bömer 2005, 481-482). The aim of high-throughput screening is to identify substances with a specific molecular activity, e.g. the inhibition of a certain enzyme, and the assay is specifically designed to detect exactly such substances. High-throughput screening therefore presupposes that protein targets are known, that their therapeutic potential is validated and that they are available for experimentation. In contrast to traditional empirical screening in organ or animal models, the experimental set-up excludes the identification of substances with unknown targets as well as serendipitous discoveries of biological effects. Instead, the results of a high-throughput screening campaign typically feed into an analysis of the structural and chemical features of substances with the sought-for molecular action and of drug-target interaction. High-throughput screening thus not only seeks to identify promising lead substances, but also collects knowledge on the complex pharmaceutical intervention (Adam 2008a). Yet, while rational design aims to infer the solution to the complex problem from a fundamental understanding, high-throughput screening attempts to optimize the odds for finding promising solutions by chance.

Despite the opposing methodologies of rational design and high-throughput screening, both approaches often contribute jointly to the chemical and structural modeling of drug-target interaction and to the discovery and optimization of drug candidates. Regularly, there are parallel efforts to elucidate a protein structure, to identify promising substances by virtual screening and to develop assays for high-throughput screening (Good et al. 2000; Ratti and Trist 2001; Schwardt et al. 2003). In various ways, the results from each approach can enrich the other, for instance when findings from virtual screening are tested experimentally, when structural conditions for affinity determined by screening are included in the chemical modeling, or when the molecular structure of complexes of the target with empirically discovered ligands is determined.

The combination of rational and empirical methods is particularly close in so-called fragment-based screening. The idea here is first to identify, with high-throughput screening, small molecule ligands (with molecular weights below 200-300 Da) that bind to the target protein. Subsequently, their mode of interaction with the target is elucidated with nuclear magnetic resonance or crystallography. Suitable fragments are then linked, by way of rational design, into larger, drug-size compounds, whose affinity is then again tested empirically. Step by step, lead substances can thus be developed by combining chance empirical findings and structure-based design. Many examples of successful application of fragment-based screening both to lead discovery and to lead optimization have been cited (Erlanson 2006).

Rational design and high-throughput screening can thus be used as complementary approaches in early drug development. The two approaches can be readily integrated because they are both concerned with drug-target interaction. Both methods concentrate on molecular structure and action in isolation and largely ignore the wider biological context. This allows for an efficient combination of capacities in experimental search and knowledge-based modeling.

4. Systems biology challenges mainstream drug discovery

The common focus of rational drug design and high-throughput screening is on interventive complexity. However, this concentration of early pharmaceutical research on drug-target interaction has increasingly been challenged. In particular, a provocative paper by biotechnology pioneer David F. Horrobin from 2003 triggered a broad debate in the drug research community on the overall orientation of pharmaceutical research (Horrobin 2003). Horrobin argued that to a large degree, current research resembles a Glasperlenspiel, i.e. a game which is intellectually demanding and internally consistent, yet carries little relevance for real medical problems. Many authors took up Horrobin's critique and added their view of how pharmaceutical research is misled by its strong emphasis on interventive complexity (Butcher 2005, van der Greef and McBurney 2005, Kitano 2007, Kubinyi 2003, Shaffer 2005). Much of the impetus of the critique comes from the widely shared perception that at least since the mid-1990s, the pharmaceutical industry has been going through a severe productivity crisis. Despite an exponential growth of expenditures for drug research over recent decades, the number of new substances approved each year by the U.S. Food and Drug Administration as novel pharmaceuticals decreased from 53 to 22 between 1996 and 2006 (FDA 2007, Nightingale and Martin 2004). Among these new drugs, only two or three substances each year actually addressed new molecular targets (Congreve et al. 2005). It is a shared assumption of the contributors to the debate that drug discovery can only be "rescued" through a fundamental reorientation.
They propose various new approaches to drug discovery that have in common that they are inspired by systems biology. The critique of the prevalent paradigms of drug discovery and the arguments in favor of the new systems-biology approaches shed light on the relation between interventive and contextual complexity in pharmaceutical research. They show, on the one hand, that the focus of early drug development on isolated targets and their interaction with drug candidates, characteristic of drug development since the 1980s, has led to a neglect of the complex causal context into which one aims to intervene. On the other hand, while the systems biology proposals pay closer attention to contextual complexity, it is not yet clear how the existing high standards of investigation into drug-target interactions can be maintained in this approach. The discussions surrounding the systems-biology approaches thus indicate which obstacles exist for combining the study of contextual and interventive complexity.

The central critique that is leveled by the systems biology camp against mainstream pharmaceutical research is that it follows a "reductionist" approach (Horrobin 2003, 153; van der Greef and McBurney 2005, 961; Kubinyi 2003, 665). This reductionism is taken to become manifest in two respects: in the concentration of drug discovery (and biomedical research in general) on the study of isolated components of the organism, and in its orientation towards single targets as the objects of pharmaceutical intervention. Altogether, mainstream drug researchers follow the ideal of finding a single locus that plays a central pathophysiological role, and which is to be manipulated by a precise pharmaceutical intervention. From the perspective of the systems biology approaches, this ideal is fundamentally mistaken.

According to Horrobin, the study of the biochemistry of cells in vitro can properly reveal only what he calls the anatomical biochemistry, but not the functional biochemistry. We might find out which biochemical steps are present and therefore which pathways are possible. Yet, he argues, which pathways are actually instantiated in vivo depends on the natural context, including the circulation, the nutrient and oxygen supply, and the environment of hormones, which is lacking in in vitro studies. In addition, the effects of keeping cell cultures in antibiotics or in a lipid environment that differs from the natural one are not sufficiently taken into account. If, as Horrobin claims, most diseases are based on defects in the functional rather than the anatomical biochemistry, the information that is most relevant for the development of pharmaceuticals cannot be gained through de-contextualized investigations (Horrobin 2003, 152-153).

With respect to targets, Hugo Kubinyi points out that many important drugs possess rather unspecific profiles of action and are effective through a balanced effect on several targets (Kubinyi 2003, 665). In addition, many drugs act indirectly, often at a distant site, rather than on targets that are central to the pathophysiology. In some cases, such as the cholesterol-lowering statins, drugs interfere with a variety of mechanisms, and it seems surprising from the perspective of mainstream drug development that they do not have more severe side effects.
More generally, it is argued that the prediction of the biological action of a substance on the basis of its activity on targets in isolation is highly problematic. Already the validation of the targets, i.e. the verification of the crucial role of the target in the pathology, is very difficult. The use of gene-knockout mice for this purpose often delivers inconsistent results even across different strains of mice, and the transferability of results from mice to humans is uncertain as well. The concentration of early drug development on substances that selectively address single targets, as practiced by rational drug design and high-throughput screening, is therefore criticized as being too narrow and of doubtful relevance (Horrobin 2003, 152; Butcher 2005, 461).

It is a central objection of the systems biology proponents against the established approaches that these cannot do justice to the complexity of biological systems. Various dimensions of complexity are adduced (van der Greef and McBurney 2005, 961). In particular, the robustness and fragility of mechanistic networks is taken to be a feature of complex biological systems that is important for drug action. According to Hiroaki Kitano, complex biological systems are robust against a broad range of perturbations and therefore also against many modes of pharmaceutical intervention (Kitano 2007). Robustness can be based, for instance, on negative feedback loops such that if a drug changes the level of some molecule, a feedback loop compensates for this change. Alternatively, due to fail-safe mechanisms, systems can continue to function even if a pathway is blocked by a drug, since alternative pathways take up its role. According to Kitano, the robustness of a system typically comes along with specific points of fragility, such as the control points of feedback loops. If drugs perturb such points, they can have great efficacy (or cause serious side effects). Kitano argues that from the perspective of systems biology, many of the current drugs appear rather exceptional in that one substance successfully targets a point of fragility of a system. Diseases such as cancer, diabetes or autoimmune disorders prove difficult to treat pharmaceutically in this way. He attributes these difficulties to the robustness of the human organism on different levels, which requires more complex modes of pharmaceutical intervention. As one step towards taking the robustness and fragility characteristics of complex biological systems into account, he proposes testing combination therapies much more systematically. If his account of robustness is correct, one could expect that the effect of two drugs in combination can be much larger than the sum of their single effects. For instance, a neutralization effect due to a fail-safe mechanism could be eliminated by blocking the alternative mechanism as well.
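Kitano's point can be made concrete with a toy simulation: a molecule produced by two redundant pathways under negative feedback. The model and all rate constants below are invented for illustration and do not describe any real pathway; they merely show how blocking either pathway alone is partly buffered, while blocking both has an effect well beyond the sum of the single effects.

```python
# Toy model: a molecule X is produced by two redundant pathways (a fail-safe)
# under negative feedback and degraded at a constant rate. All parameters are
# invented for illustration.

def steady_state(inhibit_a=0.0, inhibit_b=0.0, v_a=1.0, v_b=1.0,
                 feedback_k=1.0, degradation=0.5, dt=0.01, steps=20000):
    """Integrate dX/dt = production * feedback(X) - degradation * X to steady state."""
    x = 0.0
    for _ in range(steps):
        production = v_a * (1 - inhibit_a) + v_b * (1 - inhibit_b)
        feedback = feedback_k / (feedback_k + x)   # production falls as X rises
        x += dt * (production * feedback - degradation * x)
    return x

baseline = steady_state()
drug_a_only = steady_state(inhibit_a=0.9)              # 90% block of pathway A
drug_b_only = steady_state(inhibit_b=0.9)              # 90% block of pathway B
combination = steady_state(inhibit_a=0.9, inhibit_b=0.9)

def effect(x):
    """Reduction of the steady-state level of X relative to baseline, in percent."""
    return 100.0 * (baseline - x) / baseline

print(f"single drugs:         {effect(drug_a_only):.0f}% and {effect(drug_b_only):.0f}%")
print(f"additive expectation: {effect(drug_a_only) + effect(drug_b_only):.0f}%")
print(f"actual combination:   {effect(combination):.0f}%")
```

In this toy network, a 90% block of one pathway lowers the steady-state level by only about a third, whereas blocking both pathways lowers it by about four fifths, clearly more than the additive expectation.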
Jan van der Greef and Robert McBurney (2005) propose the use of what they call "systems response profiles" in drug development. To be able to evaluate the effect of drugs on the organism as a whole, a wide range of molecular parameters from bodily fluids, cells or tissues are to be assembled with bioanalytical techniques. Diseases are analyzed through how they change these parameters. Drug development would then be aimed at substances that restore the profile of the healthy state. Drug candidates are evaluated for the changes they induce in the parameters, i.e. for their system response profile. According to van der Greef and McBurney, system response profiles could be used, for instance, to systematically identify promising combination therapies on the basis of the drugs' individual system response profiles and the profile of the disease. The approach grounds drug development on molecular information, yet rather than focusing on specific targets, it is directed at the overall effect of drug candidates on the organism.

Another approach, developed by Eugene Butcher (2005), also concentrates on the biological effects of test substances rather than on the manipulation of specific targets. Butcher proposes using cell systems to screen for novel drugs. Such cell systems consist of various cell types and incorporate many of the pathways that are taken to be central to the disease. They are intended to model the disease biology and to mimic its complexity. Butcher takes it to be a main advantage of the method that one can screen against a broad range of potential targets which do not have to be identified or validated in advance. He claims that compared to this approach, screening against single targets as practiced in high-throughput screening is too costly and slow. In addition, with cell systems assays, potential drugs can be selected for their biological effects, which he takes to increase the chances of finding unexpected modes of pharmacological intervention. It is claimed to be a major advantage of the approach that it allows for such serendipitous findings (Butcher 2005, 463).

To a considerable degree, the proposed alternative approaches to mainstream drug discovery revive methods of traditional early drug development, such as biological screening (Butcher 2005), classical medicinal chemistry (Kubinyi 2003), and clinical observation (Horrobin 2003). Accordingly, the proposals have been advertised as a "return to the fundamentals of drug discovery" (Williams 2004). This does not mean, of course, that they actually return to the methods of the 1970s rather than using the techniques of modern drug development. They build, e.g., on efficient screening methods (Butcher 2005) or molecular analytics (van der Greef and McBurney 2005), and some proponents also intend to integrate rational methods of modeling drug-target interaction (Kubinyi 2003). Still, the contrast both to existing rational design and to high-throughput screening approaches is striking. For instance, while detailed knowledge of the targets is fundamental both to rational design and high-throughput screening, it is taken to be dispensable or is left to subsequent research in systems biology based drug discovery. As seen above, state-of-the-art high-throughput screening considers the maximal isolation of the target in biochemical assays to be the "gold standard", while cell-based assays are taken to be problematic because of the possibility of "off-target hits" when the test substance binds to targets other than the one in focus (von Ahsen and Bömer 2005, 481). The systems biology approaches, by contrast, seek to include rather than exclude the complexity of the disease biology and count on unexpected 'hits'. Since it is taken to be highly problematic to infer biological effects from molecular action and since important "emergent properties" (Butcher 2005, 465) are seen to arise on the systems level, the study of isolated targets is considered to be largely futile.
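Read computationally, van der Greef and McBurney's profile-based search could rank candidate interventions, including combinations, by how closely the changes they induce return the disease profile to the healthy one. The following sketch uses invented parameters and a naively additive rule for combining drug effects; it illustrates the idea, not the authors' actual methodology.

```python
# Illustrative ranking of interventions by how well they restore a healthy
# "system response profile". All names and values are invented.
import math

PARAMETERS = ["metabolite_1", "metabolite_2", "protein_marker"]

healthy = {"metabolite_1": 1.0, "metabolite_2": 0.5, "protein_marker": 2.0}
disease = {"metabolite_1": 2.0, "metabolite_2": 0.1, "protein_marker": 2.5}

# Hypothetical changes each drug induces in the parameters (its response profile).
drug_profiles = {
    "drug_A": {"metabolite_1": -0.8, "metabolite_2": 0.00, "protein_marker": -0.1},
    "drug_B": {"metabolite_1": -0.1, "metabolite_2": 0.35, "protein_marker": -0.4},
}

def residual_distance(changes):
    """Distance between the treated disease state and the healthy profile."""
    return math.sqrt(sum((disease[p] + changes[p] - healthy[p]) ** 2 for p in PARAMETERS))

candidates = dict(drug_profiles)
# Naive combination: assume the two profiles simply add up (real systems may not).
candidates["drug_A + drug_B"] = {p: drug_profiles["drug_A"][p] + drug_profiles["drug_B"][p]
                                 for p in PARAMETERS}

for name in sorted(candidates, key=lambda n: residual_distance(candidates[n])):
    print(f"{name}: residual distance to healthy profile = {residual_distance(candidates[name]):.2f}")
```

The additive combination rule used here is, of course, exactly the kind of assumption that the more-than-additive effects discussed next call into question.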
The systems biology approaches are directed towards drugs that act on many targets, or towards combinations of specific drugs with the potential for "more-than-additive" effects (van der Greef and McBurney 2005, 964-965). Altogether, the novel approaches aim at controlling the complexity of biological systems through complex interventions (Kitano 2007, 208).

This is not the place to attempt to decide the dispute between the systems biologists and mainstream drug discovery. The systems biology approaches are still at an early stage, and proof-of-principle for these approaches still has to be delivered (van der Greef and McBurney 2005, 966). It is therefore hard to estimate the prospects of the novel approaches: whether they are just another myth or hype, or can decisively contribute to overcoming the pharmaceutical industry's productivity crisis. (Some of the advocates of the new approaches have declared competing financial interests (Butcher 2005; van der Greef and McBurney 2005), so one has to be aware of the possibility that the dispute might also be about markets for research technology; cp. Shaffer 2005.) Still, the objections against mainstream drug discovery cannot be easily discarded, and even proponents of isolationist methods concede that reductionist strategies have been too simplistic (Shaffer 2005). The doubts about the development aim 'single-substance-single-target' are substantial and make reference to very general features of biological networks. In addition, if the claims concerning the insufficient relevance of highly isolated experimental settings for medical practice are only partly adequate, huge research efforts would be rendered questionable.

At the same time, however, the systems biology approaches leave important questions open. Systems biology drugs, too, would have to act through interactions with molecular targets. While the proposed search for useful combinations of existing drugs might deliver some interesting new options, the systems biology approaches would ultimately have to indicate how new lead substances can be identified and optimized. Yet, there are serious doubts that non-isolationist approaches would be efficient in discovering lead substances. With in vivo screens, for instance, hits with low yet improvable affinity or not yet optimal pharmacokinetic properties are likely to remain undetected even though they could be promising leads (Lipinski and Hopkins 2004, 860). In addition, it is so far unclear how targeted optimization of leads is possible in a systems biology setting. A lead substance that comes out of a systems biology based search is likely to have a multitude of targets, with which it is likely to interact in a number of different ways. In addition, due to the holistic search strategy, the single molecular activities and their respective contributions to the overall biological effects might not be transparent. It would seem difficult, under these conditions, to establish meaningful structure-activity relations that would allow for a targeted optimization of the biological effect. As a consequence, an elucidation of the molecular activity profile might prove necessary for optimization after all (Butcher 2005, 466).

5. Is emergence the problem?

These considerations make two things clear that are particularly important for the purposes of this paper.
First, it is unlikely that the strong focus on interventive complexity that is manifest in mainstream early drug discovery is efficient. Instead, in its attempt to first solve the local problem of how to intervene in the diseased organism, it loses sight of the systems effects of these interventions. In addition, many pharmaceutical options that cannot be reduced to the single-molecule-single-target pattern but require more complex modes of intervention never come into view. Second, the systems biology approaches might address systems complexity from early on in drug development. Yet, epistemic and manipulatory access to the detailed molecular interactions remains indispensable for lead identification and optimization, and the systems biology approaches leave open how the grip on these details can be maintained.

These two observations together indicate a significant dilemma for the early stages of drug development. Sufficient attention to the local level and interventive complexity on the one hand, or to the systems level and contextual complexity on the other, seems to be possible only at the expense of the respective other. However, an integrated treatment of both levels would be required for methodical early drug development.

As shown above, the systems biologists have described the problem in terms of reduction and emergence (see, e.g., Kitano 2007, 202; Butcher 2005, 465; Horrobin 2003, 153; van der Greef and McBurney 2005, 961; Kubinyi 2003, 665; Van Regenmortel 2004). In their analysis, the problem with mainstream drug development is that the behavior of biological systems cannot be reduced to the behavior of their molecular parts, but is emergent relative to the molecular level. If this were true, the sketched dilemma for drug development would be based on the very nature of biological systems. Generally speaking, reductionism holds that there is a one-sided dependence of the whole on its parts such that the properties and the behavior of the whole are determined by the properties and the behavior of the parts. This implies that, at least in principle, system properties can be fully explained by the properties of the parts, while emergent system properties could not be thus explained or predicted (cp. Carrier and Finzer 2006, 272; Van Regenmortel 2004, 1016). If system properties are emergent relative to molecular properties, it would be a matter of principle, independent of the state of scientific knowledge and technological capacities, that systems effects cannot be predicted and explained on the basis of molecular knowledge alone. The methodological obstacles to combining our studies of complex molecular interactions and of complex biological systems would then be based on an ontological divide between the two realms.

Yet, the specific arguments that are adduced in favor of the systems biology approaches in drug discovery do not preclude reductionism. As seen in the previous section, systems biologists take issue with the concentration of mainstream drug discovery on isolated parts of complex biological systems and with its ideal of single-target drugs. In terms of reductionism, the crucial discrepancy between mainstream and systems biology drug development is therefore whether system properties can be explained from the properties of isolated parts and whether systems effects are predictable from local molecular interventions.
Systems biologists argue that the context is essential to define the relevant molecular properties and that it is crucial to consider the networks of mechanisms that mediate the effects of local interventions. Complex systems effects are therefore taken to require interventions at a variety of loci, which can have a more-than-additive overall effect. These arguments are compatible with the assumption that the molecular parts and their (molecular) interactions in their entirety unidirectionally determine and explain the system properties. The arguments are rather directed against local reductions, not against reductionism as a general position on the relation between the molecular level and biological system properties. The possibilities are left open that the relevant context of the molecular components of the system can in principle itself be characterized on a molecular level, and that system properties such as the formation of feedback loops, fail-safe mechanisms or non-linear combination laws can be explained from the properties of the parts and their molecular context.

(An exception among the cited systems biologists is Van Regenmortel 2004. Among other things, he claims that biological functional properties are essential to systems behavior, and that these properties could only be explained on the basis of evolutionary history and environmental factors. Yet, Van Regenmortel's cases come from the development of vaccines. As biological pharmaceuticals, they might raise different problems with respect to reductionism than the development of synthetic chemical drugs, which is the focus of attention of the other authors.)

There is thus no need to assume that the dilemma for drug discovery has an ontological basis in emergent system properties. Instead, the problem is methodological in character. The local problem of identifying and optimizing tools for pharmaceutical intervention might well be treated systematically with rational design and high-throughput screening if the efforts are concentrated on specific drug-target interactions. Yet, due to the complexity of biological systems, predictions of the overall effects of such interventions are often impossible, and it is therefore largely a matter of luck whether potent and selective molecular agents turn out to be therapeutically valuable and safe drugs. The alternative approach of concentrating on systems effects may allow desirable systems effects to be described and perhaps broken down into a broad range of necessary interventions at a variety of molecular targets. Yet, it remains unclear how substances with a thus specified profile could be identified or optimized in a systematic way. While one level of complexity is addressed methodically in either case, a solution on the other level seems to depend on lucky coincidences or otherwise unpredictable sources of knowledge or interventive capacities. No method is in sight that tackles the complexities on both levels equally. This problem arises from fairly general features of biological systems. Effectively, a complex mode of intervention on specific parts of a system (the protein targets) and a complex interplay of the parts in bringing about systems behavior are sufficient to produce a fairly obstinate dilemma for technological development.
It has repeatedly been described as a characteristic feature of pharmaceutical research and development that it needs to draw together scientific and technological resources from a number of different areas (chemical, biological, and clinical), while its success often remains dependent on chance or serendipity (Maxwell and Eckhardt 1990, 411). It is no surprise that novel development approaches aimed at overcoming this situation by integrating the heterogeneous knowledge base and thus making the drug development process less dependent on coincidences (Drews 1995, 936). Certainly, each of the approaches sketched in this paper can claim its successes or promises. The obstinate dilemma, however, marks the point where they seem bound to fail in their attempt to change that basic condition of drug development.

This has far-reaching consequences for the epistemology of pharmaceutical research and development. At specific points of the subject area, various scientific and technological approaches can be integrated in a very fruitful way, as the combination of rational design and high-throughput screening shows. Yet on the whole, the subject area is too complex, and the different approaches are too limited and domain-specific, to enable comprehensive epistemic and technological access. Both scientific and technological methodical endeavors therefore remain, to an important degree, isolated attempts at understanding and controlling the subject area. Technological success thus regularly depends on coincidences that show up unexpectedly and are little understood.